Privacy as a Desideratum – Making the Exponential Mechanism Truthful with Payments
Abstract
In the last lecture, we saw (one instance of) how differential privacy can be used as a tool in mechanism design: we used it to design a prior-free, near-revenue-optimal digital goods auction. As this course continues, we will see several other instances in which privacy is used purely as a tool to solve other mechanism design problems, but today I want to shift gears and think of differential privacy itself as a goal of the mechanism designer.

Why might a mechanism designer care about providing privacy for the participants in a mechanism? The simplest answer is that participants might not participate if their privacy is not protected. But why? Shouldn't we just model all of the agents' concerns in their utility functions? The issue is that this might be difficult to do. Ideally we can come up with a utility function that accurately models an agent's gains in some restricted setting: for example, in an auction, we imagine the agent has some value v_i for receiving an item, which might accurately capture the agent's true utility while interacting with the auctioneer. However, unbeknownst to the auctioneer, the agent might be participating in some other interaction tomorrow; for example, he might be involved in a negotiation for some other good, with some other buyer. If the agent's value for the good he is purchasing today is correlated with his value for the good he might be purchasing tomorrow, then he has a very real concern about how information leaked about today's interaction might affect tomorrow's. We might worry that this unmodeled utility could substantially affect the agent's incentives in the otherwise dominant strategy truthful mechanism that we are running today.

We can view privacy as a way of making mechanisms robust to unmodeled future utility that might be affected by agents' participation in a mechanism today. Suppose that we are designing a mechanism M : T^n → O, and we know that agents have utility functions u_i(o) ≡ u(t_i, o) over the outcome space O. However, suppose that (unbeknownst to us) each agent i will also experience some outcome o'_i ∈ O'_i tomorrow, chosen in a way that might depend on the outcome that our mechanism selects today. We make no assumptions about how this future outcome is chosen; we imagine only that there is some (randomized) function f_i : O → O'_i that determines the distribution over outcomes o' ∈ O'_i. In total, agent i obtains utility u_i(o, o') = u_i(o) + u_i^2(o'), where u_i^2 denotes his (unknown to us) utility for tomorrow's outcome. We observe that if the mechanism M we design is ε-differentially private in addition to being dominant strategy truthful with respect to the known utility functions u_i, then it remains ε-approximately dominant strategy truthful even with respect to the unknown total utility functions u_i(o, o'). This is important, since we must always expect that our models are at best good approximations of people's true utilities.
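To see why this observation holds, here is a minimal sketch of the standard argument; the bounding assumptions (future utilities u_i^2 taking values in [0, 1], privacy parameter ε ≤ 1) are ours and are not stated in the abstract above. For any misreport t_i' and any fixed reports t_{-i} of the other agents,

\[
\mathbb{E}\bigl[u_i^2\bigl(f_i(M(t_i', t_{-i}))\bigr)\bigr]
\;\le\; e^{\varepsilon}\,\mathbb{E}\bigl[u_i^2\bigl(f_i(M(t_i, t_{-i}))\bigr)\bigr]
\;\le\; \mathbb{E}\bigl[u_i^2\bigl(f_i(M(t_i, t_{-i}))\bigr)\bigr] + \bigl(e^{\varepsilon} - 1\bigr),
\]

where the first inequality holds because differential privacy is preserved under post-processing by the arbitrary (randomized) map f_i, and the second uses u_i^2 ≤ 1. Misreporting can therefore raise the agent's future utility by at most e^ε − 1 ≤ 2ε, while by dominant strategy truthfulness it cannot raise his utility today; truthful reporting thus remains approximately dominant for the combined utility u_i(o) + u_i^2(o'), with the loss vanishing as ε → 0.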
Similar Resources
Privacy as a Tool for Mechanism Design (for arbitrary objective functions) Without Money
Today we return to the idea of differential privacy as a tool to be wielded in mechanism design. We will prove a simple, but remarkable theorem that deviates from our normal intuition as mechanism designers. Typically, we think of social welfare as a special objective function – we can always use the VCG mechanism to optimize social welfare, but in general, there do not exist truthful mechanism...
Truthfulness for the Sum of Weighted Completion Times
We consider the problem of designing truthful mechanisms for scheduling selfish tasks on a single machine or on a set of m parallel machines. The objective of every selfish task is the minimization of its completion time while the aim of the mechanism is the minimization of the sum of weighted completion times. For the model without payments, we prove that there is no (2 − ε)-approximate determi...
Two Dimensional Optimal Mechanism Design for a Sequencing Problem
We consider optimal mechanism design for a sequencing problem with n jobs which require a compensation payment for waiting. The jobs’ processing requirements as well as unit costs for waiting are private data. Given public priors for this private data, we seek to find an optimal mechanism, that is, a scheduling rule and incentive compatible payments that minimize the total expected payments to ...
Expanding “Choice” in School Choice
Truthful revelation of preferences has emerged as a desideratum in the design of school choice programs. Gale-Shapley’s deferred acceptance mechanism is strategy-proof for students but limits their ability to communicate their preference intensities. This results in ex-ante inefficiency when ties at school preferences are broken randomly. We propose a variant of deferred acceptance mechanism wh...
Lower Bound for Envy-Free and Truthful Makespan Approximation on Related Machines
We study problems of scheduling jobs on related machines so as to minimize the makespan in the setting where machines are strategic agents. In this problem, each job j has a length l_j and each machine i has a private speed t_i. The running time of job j on machine i is t_i · l_j. We seek a mechanism that obtains speed bids of machines and then assigns jobs and payments to machines so that the machine...
Publication date: 2014